Dynamic Scaling using a token based implementation#603
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##             main     #603      +/-   ##
==========================================
+ Coverage   89.19%   89.24%   +0.05%
==========================================
  Files          23       23
  Lines        7579     7618      +39
==========================================
+ Hits         6760     6799      +39
  Misses        819      819
source/s3_client.c (Outdated)

          s_s3_client_meta_request_finished_request(client, meta_request, request, AWS_ERROR_S3_CANCELED);
          request = aws_s3_request_release(request);
    -  } else if ((uint32_t)aws_atomic_load_int(&meta_request->num_requests_network) < max_active_connections) {
    +  } else if (s_s3_client_acquire_tokens(client, request)) {
I think we should still respect the max active connections from the settings, and then apply the token limitation on top of that.
The acquire-tokens call checks the max-active-connection limits at both the client and the meta-request level before allocating tokens. Should that maybe be a separate function?
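A token acquire of this shape is typically a compare-and-swap loop on the shared bucket. Below is a minimal, hedged sketch, not the PR's code: the real s_s3_client_acquire_tokens discussed here also performs the client- and meta-request-level connection checks, and the function name and signature below are illustrative assumptions.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative try-acquire on a shared token bucket using a CAS loop.
 * Names are assumptions; the PR's real function also checks connection
 * limits before touching the bucket. */
static bool s_try_acquire_tokens(_Atomic uint32_t *bucket, uint32_t want) {
    uint32_t available = atomic_load(bucket);
    while (available >= want) {
        if (atomic_compare_exchange_weak(bucket, &available, available - want)) {
            return true; /* tokens reserved for this request */
        }
        /* CAS failed: `available` was reloaded with the fresh value; retry. */
    }
    return false; /* not enough tokens; leave the request queued */
}
```

The weak CAS may fail spuriously, but the loop simply re-reads and retries, so no tokens are ever lost or double-counted.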
source/s3_client.c (Outdated)

        uint32_t tokens = 0;

        switch (request->request_type) {
Why are we calculating the tokens used again here? Should the request keep the tokens it used?
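One way to implement what the reviewer suggests is to stash the token count on the request at acquire time and hand back exactly that amount on release, so there is no second switch on request_type. A hedged sketch follows; the struct and field names (tokens_held) are hypothetical, not from the PR.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical slimmed-down request: the token count is recorded once when
 * the tokens are acquired, so release never recomputes it. */
struct aws_s3_request_sketch {
    uint32_t tokens_held; /* set at acquire time, cleared at release */
};

/* Return exactly what was taken; no recalculation from request_type. */
static void s_release_tokens(_Atomic uint32_t *bucket, struct aws_s3_request_sketch *request) {
    atomic_fetch_add(bucket, request->tokens_held);
    request->tokens_held = 0; /* guard against double-release */
}
```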
source/s3_client.c (Outdated)

        /* Currently the ideal part size is 8MB and hence the value set.
         * However, this is subject to change due to newer part sizes and adjustments. */

        const uint32_t s_s3_minimum_tokens = S_IDEAL_PART_SIZE * 8 * S_S3_CLIENT_MINIMUM_CONCURRENT_REQUESTS;
I don't understand.
- Why is S_S3_CLIENT_MINIMUM_CONCURRENT_REQUESTS 8?
- Why do we multiply those to get the minimum tokens?
- What does the minimum token count represent?
source/s3_client.c (Outdated)

        *(uint32_t *)&client->ideal_connection_count = aws_max_u32(
            g_min_num_connections, s_get_ideal_connection_number_from_throughput(client->throughput_target_gbps));

        s_s3_client_init_tokens(client, client->throughput_target_gbps);
Let's add the token usage to the logs from /* Step 4: Log client stats. */ so that we can track it better.
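For illustration only, a self-contained sketch of what the extra stats line could report; the real code would use the library's AWS_LOGF_* macros rather than snprintf, and the field names (tokens in use, token capacity) are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper formatting the token figures the reviewer wants logged
 * alongside the existing client stats. snprintf keeps the sketch standalone. */
static int s_format_client_stats(
    char *buf, size_t len, uint32_t open_conns, uint32_t tokens_in_use, uint32_t token_capacity) {
    return snprintf(buf, len, "open-connections:%u tokens:%u/%u",
        open_conns, tokens_in_use, token_capacity);
}
```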
source/s3_client.c (Outdated)

        }

        /* Initialize token bucket based on target throughput */
        void s_s3_client_init_tokens(struct aws_s3_client *client) {
Add static to the static functions.
    - void s_s3_client_init_tokens(struct aws_s3_client *client) {
    + static void s_s3_client_init_tokens(struct aws_s3_client *client) {
source/s3_client.c (Outdated)

        /* Checks to ensure we are not violating user-configured connection limits although we are
         * dynamically increasing and decreasing connections */
        bool s_check_connection_limits(struct aws_s3_client *client, struct aws_s3_request *request) {
Keep the same logic in the same place.
We already have an aws_s3_client_get_max_active_connections function that gets the limit from the customer settings for the meta request; reuse that function instead of writing a new one.
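To make the suggestion concrete, here is a compilable sketch of the check ordering the reviewers describe: respect the customer's connection limit first, then apply the token gate. The structs and the helper body below are minimal stand-ins for the real aws-c-s3 types and for aws_s3_client_get_max_active_connections, whose actual implementation lives in the library; everything here is an assumption for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-ins so the sketch compiles; the real structs are far larger. */
struct aws_s3_meta_request { uint32_t num_requests_network; };
struct aws_s3_client {
    uint32_t max_active_connections_override; /* 0 means "no override" */
    uint32_t ideal_connection_count;
};

/* Stub with the same shape as the existing helper the reviewer points at. */
static uint32_t aws_s3_client_get_max_active_connections(
    struct aws_s3_client *client, struct aws_s3_meta_request *meta_request) {
    (void)meta_request; /* the real function also folds in per-meta-request settings */
    return client->max_active_connections_override != 0
        ? client->max_active_connections_override
        : client->ideal_connection_count;
}

/* Hypothetical dispatch gate: customer limit first, token availability second. */
static bool s_should_dispatch(
    struct aws_s3_client *client, struct aws_s3_meta_request *meta_request, bool tokens_available) {
    uint32_t max_conns = aws_s3_client_get_max_active_connections(client, meta_request);
    if (meta_request->num_requests_network >= max_conns) {
        return false; /* settings limit reached; tokens never consulted */
    }
    return tokens_available;
}
```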
source/s3_client.c (Outdated)

        void s_s3_client_init_tokens(struct aws_s3_client *client) {
            AWS_PRECONDITION(client);

            aws_atomic_store_int(&client->token_bucket, (uint32_t)client->throughput_target_gbps * 1024);
Check for overflow. throughput_target_gbps is customer-provided, so it could be anything; check whether this multiply can overflow.
Issue #, if available:
Description of changes:
Implement a token based connection management system to achieve a more accurate usage of connections for S3 and S3 Express.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.